Shallow Thoughts : tags : open source
Akkana's Musings on Open Source Computing and Technology, Science, and Nature.
Thu, 22 Dec 2016
I recently wrote two articles on Python packaging:
Distributing Python Packages Part I: Creating a Python Package
and
Distributing Python Packages Part II: Submitting to PyPI.
I was able to get a couple of my programs packaged and submitted.
Ongoing Development and Testing
But then I realized all was not quite right. I could install new releases of
my package -- but I couldn't run it from the source directory any more.
How could I test changes without needing to rebuild the package for
every little change I made?
Fortunately, it turned out to be fairly easy. Set PYTHONPATH to a
directory that includes all the modules you normally want to test.
For example, inside my bin directory I have a python directory
where I can symlink any development modules I might need:
mkdir ~/bin/python
ln -s ~/src/metapho/metapho ~/bin/python/
Then add the directory at the beginning of PYTHONPATH:
export PYTHONPATH=$HOME/bin/python
With that, I could test from the development directory again,
without needing to rebuild and install a package every time.
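A quick sanity check to see which copy of a module Python is actually picking up (using metapho, the module symlinked above, as the example):
import metapho
print(metapho.__file__)
# Should print a path under ~/bin/python when the symlinked copy is in use.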
Cleaning up files used in building
Building a package leaves some extra files and directories around,
and git status
will whine at you since they're not
version controlled. Of course, you could gitignore them, but it's
better to clean them up after you no longer need them.
To do that, you can add a clean
command to setup.py.
import os
from setuptools import Command

class CleanCommand(Command):
    """Custom clean command to tidy up the project root."""
    user_options = []
    def initialize_options(self):
        pass
    def finalize_options(self):
        pass
    def run(self):
        os.system('rm -vrf ./build ./dist ./*.pyc ./*.tgz ./*.egg-info ./docs/sphinxdoc/_build')
(Obviously, that includes file types beyond what you need for just
cleaning up after package building. Adjust the list as needed.)
Then in the setup() function, add these lines:
cmdclass={
    'clean': CleanCommand,
}
Now you can type
python setup.py clean
and it will remove all the extra files.
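If you'd rather not shell out to rm at all, the same cleanup can be done in pure Python. Here's a rough sketch of a replacement run() method, untested against a real project:
import glob
import os
import shutil

def run(self):
    '''Remove build artifacts without relying on os.system().
    Drop this into CleanCommand in place of the run() above.'''
    for pattern in ['build', 'dist', '*.pyc', '*.tgz',
                    '*.egg-info', 'docs/sphinxdoc/_build']:
        for path in glob.glob(pattern):
            print('removing %s' % path)
            if os.path.isdir(path):
                shutil.rmtree(path)
            else:
                os.remove(path)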
Keeping version strings in sync
It's so easy to update the __version__ string in your module and
forget that you also have to do it in setup.py, or vice versa.
Much better to make sure they're always in sync.
I found several versions of this that use system("grep..."), but I decided to write my own that doesn't depend on system(). (Yes, I should do the same thing with that CleanCommand -- see the sketch above.)
def get_version():
    '''Read the module version from pytopo/__init__.py'''
    with open("pytopo/__init__.py") as fp:
        for line in fp:
            line = line.strip()
            if line.startswith("__version__"):
                parts = line.split("=")
                if len(parts) > 1:
                    # Strip whitespace and any surrounding quotes:
                    return parts[1].strip().strip('\'"')
Then in setup():
version=get_version(),
Much better! Now you only have to update __version__ inside your module
and setup.py will automatically use it.
Using your README for a package long description
setup has a long_description for the package, but you probably
already have some sort of README in your package. You can use it for
your long description this way:
# Utility function to read the README file,
# used for the long_description.
import os

def read(fname):
    with open(os.path.join(os.path.dirname(__file__), fname)) as f:
        return f.read()

Then in setup():
long_description=read('README'),
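Putting it all together, the skeleton of a setup.py using these tricks might look something like this (a sketch with placeholder names, not a complete working example):
import os
from setuptools import setup, Command

# ... CleanCommand, get_version() and read() as defined above ...

setup(name='packagename',
      version=get_version(),
      description='One-line description of your package',
      long_description=read('README'),
      cmdclass={'clean': CleanCommand},
     )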
Tags: programming, python, open source
[ 10:15 Dec 22, 2016 | More programming | permalink to this entry ]
Sat, 17 Dec 2016
In Part I, I discussed writing a setup.py to make a package you can submit to PyPI.
Today I'll talk about better ways of testing the package,
and how to submit it so other people can install it.
Testing in a VirtualEnv
You've verified that your package installs. But you still need to test
it and make sure it works in a clean environment, without all your
developer settings.
The best way to test is to set up a "virtual environment", where you can
install your test packages without messing up your regular runtime
environment. I shied away from virtualenvs for a long time, but
they're actually very easy to set up:
virtualenv venv
source venv/bin/activate
That creates a directory named venv under the current directory,
which it will use to install packages.
Then you can pip install packagename
or
pip install /path/to/packagename-version.tar.gz
Except -- hold on! Nothing in Python packaging is that easy.
It turns out there are a lot of packages that won't install inside
a virtualenv, and one of them is PyGTK, the library I use for my
user interfaces. Attempting to install pygtk inside a venv gets:
********************************************************************
* Building PyGTK using distutils is only supported on windows. *
* To build PyGTK in a supported way, read the INSTALL file. *
********************************************************************
Windows only? Seriously? PyGTK works fine on both Linux and Mac;
it's packaged on every Linux distribution, and on Mac it's packaged
with GIMP. But for some reason, whoever maintains the PyPI PyGTK
packages hasn't bothered to make it work on anything but Windows,
and PyGTK seems to be mostly an orphaned project so that's not likely
to change.
(There's a package called ruamel.venvgtk that's supposed to work around
this, but it didn't make any difference for me.)
The solution is to let the virtualenv use your system-installed packages,
so it can find GTK and other non-PyPI packages there:
virtualenv --system-site-packages venv
source venv/bin/activate
I also found that if I had a ~/.local directory (where packages normally go if I use pip install --user packagename), sometimes pip would install to .local instead of the venv. I never did track down why this happened sometimes and not others, but when it did, a temporary
mv ~/.local ~/old.local
fixed it.
Test your Python package in the venv until everything works.
When you're finished with your venv, you can run deactivate and then remove it with rm -rf venv.
Tag it on GitHub
Is your project ready to publish?
If your project is hosted on GitHub, you can have pypi download it
automatically. In your setup.py, set
download_url='https://github.com/user/package/tarball/tagname',
Check that in. Then make a tag and push it:
git tag 0.1 -m "Name for this tag"
git push --tags origin master
Try to make your tag match the version you've set in setup.py and
in your module.
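One way to guard against mismatches is a quick sanity check comparing the newest git tag against the module's version. This is a hypothetical helper, not part of any packaging tool; mypackage is a stand-in for your own module:
import subprocess
import mypackage   # your own module, which defines __version__

def check_tag():
    '''Warn if the newest git tag doesn't match the module version.'''
    tag = subprocess.check_output(['git', 'describe', '--tags',
                                   '--abbrev=0']).strip().decode()
    if tag != mypackage.__version__:
        print("Warning: tag %s != __version__ %s"
              % (tag, mypackage.__version__))

if __name__ == '__main__':
    check_tag()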
Push it to pypitest
Register a new account and password on both pypitest and pypi.
Then create a ~/.pypirc that looks like this:
[distutils]
index-servers =
    pypi
    pypitest

[pypi]
repository=https://pypi.python.org/pypi
username=YOUR_USERNAME
password=YOUR_PASSWORD

[pypitest]
repository=https://testpypi.python.org/pypi
username=YOUR_USERNAME
password=YOUR_PASSWORD
Yes, those passwords are in cleartext. Incredibly, there doesn't seem
to be a way to store an encrypted password or even have it prompt you.
There are tons of complaints about that all over the web but nobody
seems to have a solution.
You can specify a password on the command line, but that's not much better.
So use a password you don't use anywhere else and don't mind too much
if someone guesses.
Update: Apparently there's a newer method called twine that solves the
password encryption problem. Read about it here:
Uploading your project to PyPI.
You should probably use twine instead of the setup.py commands discussed
in the next paragraph.
Now register your project and upload it:
python setup.py register -r pypitest
python setup.py sdist upload -r pypitest
Wait a few minutes: it takes pypitest a little while before new packages
become available.
Then go to your venv (to be safe, maybe delete the old venv and create a
new one, or at least pip uninstall) and try installing:
pip install -i https://testpypi.python.org/pypi YourPackageName
If you get "No matching distribution found for packagename",
wait a few minutes then try again.
If it all works, then you're ready to submit to the real pypi:
python setup.py register -r pypi
python setup.py sdist upload -r pypi
Congratulations! If you've gone through all these steps, you've uploaded
a package to pypi. Pat yourself on the back and go tell everybody they
can pip install
your package.
Some useful reading
Some pages I found useful:
- Python Packaging with GitHub: a great tutorial, except that it forgets to mention signing up for an account.
- First time with PyPI: another good tutorial.
- Allowed PyPI classifiers -- the categories your project fits into. Unfortunately there aren't very many of those, so you'll probably be stuck with 'Topic :: Utilities' and not much else.
- Python Packages and You: not a tutorial, but a lot of good advice on style and designing good packages.
Tags: programming, python, open source
[ 16:19 Dec 17, 2016 | More programming | permalink to this entry ]
Sun, 11 Dec 2016
I write lots of Python scripts that I think would be useful to other people, but I've put off learning how to submit to the Python Package Index, PyPI, so that my packages can be installed using pip install. Now that I've finally done it, I see why I put it off for so long. Unlike programming in Python, packaging is a huge, poorly documented hassle, and it took me days to get a working package. Maybe some of the hints here will help other struggling Pythonistas.
Create a setup.py
The setup.py file describes the files in your project, along with other installation information.
If you've never created a setup.py before,
Submitting a Python package with GitHub and PyPI
has a decent example, and you can find lots more good examples with a
web search for "setup.py", so I'll skip the basics and just mention
some of the parts that weren't straightforward.
Distutils vs. Setuptools
However, there's one confusing point that no one seems to mention. setup.py examples all rely on a predefined function called setup, but some examples start with
from distutils.core import setup
while others start with
from setuptools import setup
In other words, there are two different versions of setup!
What's the difference? I still have no idea. The setuptools version seems to be a bit more advanced, and when I used distutils.core I sometimes got weird errors trying to follow suggestions I found on the web. So I ended up using the setuptools version.
But I didn't initially have setuptools installed (it's not part of the
standard Python distribution), so I installed it from the Debian package:
apt-get install python-setuptools python-wheel
The python-wheel package isn't strictly needed, but I got assorted warnings from pip install later in the process ("Cannot build wheel") unless I installed it, so I recommend installing it from the start.
Including scripts
setup.py has a scripts option to include scripts that
are part of your package:
scripts=['script1', 'script2'],
But when I tried to use it, I had all sorts of problems, starting with scripts not actually being included in the source distribution. There isn't much support for using scripts -- it turns out you're actually supposed to use something called console_scripts, which is more elaborate.
First, you can't have a separate script file, or even a __main__ inside an existing class file. You must have a function, typically called main(), so your script will end up looking something like this:
def main():
    # do your script stuff
    pass

if __name__ == "__main__":
    main()
Then add something like this to your setup.py:
entry_points={
    'console_scripts': [
        'script1=yourpackage.filename:main',
        'script2=yourpackage.filename2:main'
    ]
},
There's a secret undocumented alternative that a few people use
for scripts with graphical user interfaces: use 'gui_scripts' rather
than 'console_scripts'. It seems to work when I try it, but the fact
that it's not documented and none of the Python experts even seem to
know about it scared me off, and I stuck with 'console_scripts'.
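For the record, the syntax appears to be exactly the same as for console_scripts; a minimal sketch (names hypothetical):
entry_points={
    'gui_scripts': [
        'yourguiapp=yourpackage.gui:main'
    ]
},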
Including data files
One of my packages, pytopo, has a couple of files it needs to install,
like an icon image. setup.py
has a provision for that:
data_files=[('/usr/share/pixmaps', ["resources/appname.png"]),
('/usr/share/applications', ["resources/appname.desktop"]),
('/usr/share/appname', ["resources/pin.png"]),
],
Great -- except it doesn't work. None of the files actually gets added
to the source distribution.
One solution people mention to a "files not getting added" problem is
to create an explicit MANIFEST file listing all files that need to be
in the distribution. Normally, setup generates the MANIFEST automatically,
but apparently it isn't smart enough to notice data_files
and include those in its generated MANIFEST.
I tried creating a MANIFEST listing all the .py files plus
the various resources -- but it didn't make any difference. My
MANIFEST was ignored.
The solution turned out to be creating a MANIFEST.in file, which is
used to generate a MANIFEST. It's easier than creating the MANIFEST
itself: you don't have to list every file, just patterns that describe
them:
include setup.py
include packagename/*.py
include resources/*
If you have any scripts that don't use the extension .py, don't forget to include them as well. This may have been why scripts= didn't work for me earlier, but by the time I found out about MANIFEST.in I had already switched to using console_scripts.
Testing setup.py
Once you have a setup.py, use it to generate a source distribution with:
python setup.py sdist
(You can also use bdist to generate a binary distribution, but you'll
probably only need that if you're compiling C as part of your package.
Source dists are apparently enough for pure Python packages.)
Your package will end up in dist/packagename-version.tar.gz
so you can use tar tf dist/packagename-version.tar.gz
to verify what files are in it. Work on your setup.py until you
don't get any errors or warnings and the list of files looks right.
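You can also check the contents programmatically. Here's a little sketch (the package name, version and file list are placeholders) that complains if anything you expect is missing from the tarball:
import tarfile

expected = ['setup.py', 'README', 'packagename/__init__.py']
tar = tarfile.open('dist/packagename-0.1.tar.gz')
names = tar.getnames()
tar.close()
for fname in expected:
    if not any(name.endswith(fname) for name in names):
        print("Missing from sdist: " + fname)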
Congratulations -- you've made a Python package!
I'll post a followup article in a day or two about more ways of testing,
and how to submit your working package to PyPI.
Update: Part II is up:
Distributing Python Packages Part II: Submitting to PyPI.
Tags: programming, python, open source
[ 12:54 Dec 11, 2016 | More programming | permalink to this entry ]
Fri, 27 Nov 2015
Keeping up with source trees for open source projects, it often happens that you pull the latest source, type make, and get an error like this (edited for brevity):
$ make
cd . && /bin/sh ./missing --run aclocal-1.14
missing: line 52: aclocal-1.14: command not found
WARNING: `aclocal-1.14' is missing on your system. You should only need it if you modified `acinclude.m4' or `configure.ac'. You might want to install the `Automake' and `Perl' packages. Grab them from any GNU archive site.
What's happening is that make
is set up to run ./autogen.sh
(similar to running ./configure except it does some other stuff tailored
to people who build from the most current source tree) automatically
if anything has changed in the tree. But if the version of aclocal has
changed since the last time you ran autogen.sh or configure, then
running configure with the same arguments won't work.
Often, running a make distclean, to clean out all local configuration in your tree and start from scratch, will fix the problem.
A simpler make clean
might even be enough. But when you
try it, you get the same aclocal error.
Whoops! make clean runs make, which triggers the rule that configure has to run before make, which fails.
It would be nice if the make rules were smart enough to notice this and not require configure or autogen if the make target is something simple like clean or distclean. Alas, in most projects, they aren't.
But it turns out that even if you can't run autogen.sh with your
usual arguments -- e.g. ./autogen.sh --prefix=/usr/local/gimp-git
-- running ./autogen.sh
by itself with no extra arguments
will often fix the problem.
This happens to me often enough with the GIMP source tree that I made
a shell alias for it:
alias distclean="./autogen.sh && ./configure && make clean"
Saving your configure arguments
Of course, this wipes out any arguments you've previously passed to
autogen and configure. So assuming this succeeds, your very next
action should be to run autogen again with the arguments you actually
want to use, e.g.:
./autogen.sh --prefix=/usr/local/gimp-git
Before you ran the distclean, you could get those arguments by looking
at the first few lines of config.log. But after you've run distclean,
config.log is gone -- what if you forgot to save the arguments first?
Or what if you just forget that you need to re-run autogen.sh again
after your distclean?
To guard against that, I wrote a somewhat more complicated shell function
to use instead of the simple alias I listed above.
The first trick is to get the arguments you previously passed to
configure. You can parse them out of config.log:
$ egrep '^ \$ ./configure' config.log
$ ./configure --prefix=/usr/local/gimp-git --enable-foo --disable-bar
Adding a bit of sed to strip off the beginning of the command,
you could save the previously used arguments like this:
args=$(egrep '^ \$ ./configure' config.log | sed 's_^ \$ ./configure __')
(There's a better place for getting those arguments,
config.status -- but parsing them from there is a bit more
complicated, so I'll follow up with a separate article on that,
chock-full of zsh goodness.)
So here's the distclean shell function, written for zsh:
distclean() {
    setopt localoptions errreturn
    args=$(egrep '^ \$ ./configure' config.log | sed 's_^ \$ ./configure __')
    echo "Saved args:" $args

    ./autogen.sh
    ./configure
    make clean

    echo
    echo "==========================="
    echo "Running ./autogen.sh $args"
    sleep 3
    ./autogen.sh $args
}
The setopt localoptions errreturn
at the beginning is a
zsh-ism that tells the shell to exit if there's an error.
You don't want to forge ahead and run configure and make clean
if your autogen.sh didn't work right.
errreturn does much the same thing as the
&& between the commands in the simpler shell alias above,
but with cleaner syntax.
If you're using bash, you could string all the commands on one line instead,
with && between them, something like this:
./autogen.sh && ./configure && make clean && ./autogen.sh $args
Or perhaps some bash user will tell me of a better way.
Tags: gimp, programming, open source, shell
[ 13:33 Nov 27, 2015 | More programming | permalink to this entry ]
Thu, 22 Oct 2015
I went to a night sky photography talk on Tuesday. The presenter talked a bit about camera lenses and exposures, then showed a raw image and prepared to demonstrate how to process it to bring out the details.
His slides disappeared, the screen went blank, and then ... nothing.
He wrestled with his laptop for a while. Finally he said "Looks like
I'm going to need a network connection", left the podium and headed
out the door to find someone to help him with that.
I'm not sure what the networking issue was:
the nature center has open wi-fi, but you know how it is during talks:
if anything can possibly go wrong with networking, it will, which is
why a good speaker tries not to rely on it. And I'm not blaming this
speaker, who had clearly done plenty of preparation and thought
he had everything lined up.
Eventually they got the network connection, and he connected to Adobe.
It turns out the problem was that Adobe Photoshop is now cloud-based.
Even if you have a local copy of the software, it insists on checking
in with Adobe at least every 30 days. At least, that's the theory.
But he had used the software on that laptop earlier that same day,
and thought he was safe. But that wasn't good enough, and Photoshop
picked the worst possible time -- a talk in front of a large audience
-- to decide it needed to check in before letting him do anything.
Someone sitting near me muttered "I'd been thinking about buying that,
but now I don't think I will." Someone else told me afterward that
all Photoshop is now cloud-based; older versions still work,
but if you buy Photoshop now, your only option is this cloud version
that may decide ... at the least opportune moment ... that you can't
use your software any more.
I'm so glad I use Free software like GIMP. Not that things can't go
wrong giving a GIMP talk, of course. Unexpected problems or bugs can
arise with any software, and you take that risk any time you give a
live demo.
But at least with Free, open source software like GIMP, you know you
own the software and it's not suddenly going to refuse to run without
a license check. That sort of freedom is what makes the difference
between free as in beer, and Free as in speech.
You can practice your demo carefully before the talk
to guard against most bugs and glitches; but all the practice in the
world won't guard against software that won't start.
I talked to the club president afterward and offered to give a GIMP
talk to the club some time soon, when their schedule allows.
Tags: linux, gimp, open source, photoshop
[ 10:24 Oct 22, 2015 | More gimp | permalink to this entry ]
Mon, 16 May 2011
Update and warning: My bzr diff was not accepted. It turns
out this particular package doesn't accept that format. Apparently
different packages within Ubuntu require different types of patches,
and there's no good way to find out besides submitting one type of
patch and seeing if it's rejected or ignored. In the end, I did get
a patch accepted, and will write up separately how that patch was
generated.
The process of submitting bugs and patches to Ubuntu can be deeply
frustrating. Even if you figure out how to fix a bug and attach a patch,
the patch can sit in Launchpad for years with no attention, as this
ubuntu-devel-discuss
thread attests.
The problem is that there are a lot of bugs and not enough people
qualified to review patches and check them in. To make things easier
for the packagers, sometimes people are told to "make a debdiff" or
"make a ppa".
But it's tough to find good instructions on how to do these things.
There are partial instructions at
Contributing
and on the
Packaging Guide
-- but both pages are aimed at people who want to become regular
packagers of new apps, not someone who just has one patch for a specific bug,
and they're both missing crucial steps. Apparently there's a new and better
packaging guide being written, but it's not publicly available yet.
These days, Bazaar (bzr), not debdiff, is considered the best way to
make a patch easy for Ubuntu developers to review.
With a lot of help from #ubuntu-women, and particularly
Maco (THANKS!),
I worked through the steps to submit a patch I'd posted to
bug
370735 two years ago for gmemusage.
Here's what I needed to do.
Set up the tools
First, install some build tools you'll need, if you don't already have them:
sudo apt-get install bzr bzr-builddeb pbuilder
You will also need a Launchpad account, and you'll need to connect bzr to it:
bzr whoami "Firstname Lastname <yourname@example.com>"
bzr launchpad-login your-acct
Check out the code
Create a directory where you'll do the work:
mkdir pkgname
cd pkgname
Check out the source from bzr:
bzr branch lp:ubuntu/pkgname pkgname
Make a bzr branch for your fixes. It's probably a good idea to include the
bug number or other specifics in the branch name:
bzr branch pkgname pkgname-fix-bugnum
cd pkgname-fix-bugnum
Now you can apply the patch, e.g. patch <../mypatch.diff, or edit source files directly.
Make a package you can test
Making a package from a bzr directory requires several steps.
Making a source package is easy:
bzr bd -S -- -uc -us
This will show up as ../pkgname_version.dsc.
But if you want something you can install and test, you need a binary package.
That's quite a bit more trouble to generate.
You'll be using pbuilder to create a minimal install of Ubuntu in a chroot
environment, so the build isn't polluted by any local changes you have
on your own machine.
First create the chroot: this takes a while, maybe 10 minutes or so, or
a lot longer if you have a slow network connection. You'll also need some
disk space: on my machine it used 168M in /var/cache (plus more for
the next step). Since it uses /var/cache, it needs sudo to write there:
sudo pbuilder --create natty
Now build a .deb binary package from your .dsc source package:
sudo pbuilder --build ../pkgname_version.dsc
pbuilder will install a bunch of additional packages, like X and other
libraries that are needed to build your package but weren't included
in the minimal pbuilder setup.
And then once it's done with the build, it removes them all again.
Apparently there's a way to make it cache them so you'll have them
if you need to build again, but I'm not sure how.
pbuilder --build
gives lots of output, but none of that
output tells you where it's actually creating the .deb.
Look in /var/cache/pbuilder/result for it.
And now you can finally try installing it:
sudo dpkg -i /var/cache/pbuilder/result/pkgname_blahblah.deb
You can now test your fix, and make sure
you fixed the problem and didn't break anything else.
Check in your bzr branch
Once you're confident your fix is good, it's time to check it in.
Make a new changelog entry:
dch -i
This will open your editor of choice, where you should explain briefly
what you changed and why. If it's a fix for a Launchpad bug,
list the bug number like this:
(LP: #370735).
If you're proposing a fix for a version of Ubuntu that's already been released, you also need to add -proposed to the release name in the top line of the changelog, e.g.:
pkgname (0.2-11ubuntu1) natty-proposed; urgency=low
Also, pay attention to that ubuntu1 part of the version string
if the entry prior to yours doesn't include "ubuntu" in the version.
If you're proposing a change to a stable release, change that to
ubuntu0.1; if it's for the current development release, it's
okay to leave it at ubuntu1 (more details on this
Packaging
page).
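Putting those pieces together, a finished changelog entry might look something like this (the details are made up for illustration):
pkgname (0.2-11ubuntu1) natty-proposed; urgency=low

  * Brief description of what you fixed and why (LP: #370735)

 -- Firstname Lastname <yourname@example.com>  Mon, 16 May 2011 12:00:00 -0600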
Finally, you can check it in to your local repository:
debcommit
and push it to Launchpad:
bzr push lp:~yourname/ubuntu/natty/pkgname/pkgname-fix-bugnum
Notify possible sponsors
You'll want to make sure your patch gets on the sponsorship queue,
so someone can review it and check in the fix.
bzr lp-open
(For me, this opened chromium even though firefox is my preferred browser.
To use Firefox, I had to:
sudo update-alternatives --config x-www-browser
first.
Boo chromium for making itself default without asking me.)
You should now have a launchpad page open in your browser. Click on
"Propose for merging into another branch" and include a description of
your change and why it should be merged. This, I'm told, notifies potential
sponsors who can review your patch and approve it for check-in.
Whew! That's a lot of steps. You could argue that it's harder to prepare
a patch for Ubuntu than it was to fix the bug in the first place.
Stay tuned ... I'll let you know when and if my patch actually gets approved.
Tags: ubuntu, bugs, open source
[ 15:38 May 16, 2011 | More linux | permalink to this entry ]
Sat, 06 Feb 2010
I had the opportunity to participate in a focus group on NASA's new "citizen science" project, called Moon Zoo, with a bunch of fellow lunatics: amateur astronomers and lunar enthusiasts.
Moon Zoo sounds really interesting. Ordinary people will
analyze high-resolution photos of the lunar surface: find out how many
boulders and craters are there. I hope it will also include more
details like crater type and size, rilles and so forth, though that
wasn't mentioned. These are all tasks that are easy for a human and
hard for a computer: perfect for crowdsourcing.
Think Galaxy Zoo for the moon.
The resulting data will be used for planning future lunar missions as
well as for general lunar science.
It sounds like a great project and I'm excited about it. But
I'm not going to write about Moon Zoo today -- it doesn't
exist yet (current estimate is mid-March), though there is a
preliminary
PDF.
Instead, I want to talk about some of the great ideas that came
out of the focus group.
The primary question: How do we get people -- both amateur astronomers
and the general public, people of all ages -- interested in
contributing to a citizen science project like Moon Zoo?
Here are some of the key ideas:
Make the data public
This was the most important point, echoed by a lot of participants.
Some people felt that many of the existing "citizen science" projects
project the attitude "We want something from you, but we're not going to give
you anything in return." If you use crowdsourcing to create a dataset,
make it available to the crowd.
Opening the data has a lot of advantages:
- People can make "mashups", useful sites that display your data
in useful ways or combine it with other data. This can generate
more interest in your project and more contributors.
- School groups can work on class projects or science fair projects,
probably contributing more data along the way.
- It might help the next generation of scientists get started.
- It shows openness and good faith: witness the recent blow-up over
the leaked IPCC emails and the debate over how much climate data has
been kept private.
Projects like
Wikipedia and
Open Street Map,
as well as Linux and the rest of the open source movement,
show how much an open data model can inspire contributions.
Give credit to individuals and teams
People cited the example of SETI@Home, where teams of contributors can
compete to see who's contributed the most. Show rankings for both
individuals and groups, so they can track their progress and maybe
get a bit competitive with other groups. Highlight groups
and individuals who contribute a lot -- maybe even make it a formal
competition and offer inexpensive prizes like T-shirts or mugs.
A teenaged panel member had the great suggestion of making
buttons that said "I'm a Moon Zookeeper." Little rewards like that
don't cost much but can really motivate people.
Offer an offline version
They wanted to hear ideas for publicizing Moon Zoo to groups like
our local astronomy clubs.
I mentioned that I've often wanted to spread the word about Galaxy Zoo,
but it's entirely a web-based application and when I give talks to clubs
or school groups, web access is never an option. (Ironically, the person
leading the focus group had planned to demonstrate Galaxy Zoo to us but
couldn't get connected to the wi-fi at the Lawrence Hall of Science.)
Projects are so much easier to evangelize if you can download
an offline demo.
And not just a demo, either. There should be a way to download a
real version, including a small data set. Imagine if you could grab a
Moon Zoo pack and do a little classifying whenever you got a few spare
minutes -- on the airplane or train, or in a hotel room while traveling.
Important note: this does not mean you should write a separate
Windows app for people to download. Keep it HTML, Javascript and cross
platform so everyone can run it. Then let people download a local copy
of the same web app they run on your site.
Make sure it works on phones and game consoles
Lots of people use smartphones more than they use a desktop computer
these days. Make sure the app runs on all the popular smartphones.
And lots of kids have access to handheld web-enabled game consoles:
you can reach a whole new set of kids by supporting these platforms.
Offer levels of accomplishment, like a game
Lots of people are competitive by nature, and like to feel they're
getting better at what they're doing. Play to that: let users advance
as they get more experienced, and give them the option of
doing harder projects. "I'm up to level 7 in Moon Zoo!"
Use social networking
Facebook. Twitter. Nuff said.
Don't keep results a secret
Quite a few scientific publications have arisen out of Galaxy Zoo --
yet although most of us were familiar with Galaxy Zoo, few of us
knew that. Why so secretive?
They should be trumpeting achievements like that.
How many times have you volunteered for a survey or study, then
wondered for years afterward how the results came out? Researchers
never contact the volunteers when the paper is finally published.
It's frustrating and demotivating; it makes you not want to volunteer
again. Lots of us sign up because we're curious about the science --
but that means we're also curious about the results.
With citizen science projects, this is particularly easy. Set up a
mailing list or forum (or both) to discuss results and announce when
papers are published. Set up a Twitter account and a Facebook group
to announce new papers to anyone who wants to follow. This is the age of
Web 2.0, folks -- there's no excuse for not communicating.
I don't know if NASA will listen to our ideas. But I hope they do.
Moon Zoo promises to be a terrific project ... and the more of these
principles they follow, the more dedicated volunteers they'll get and
that will make the project even better.
Tags: science, astronomy, open source, crowdsourcing
[ 20:25 Feb 06, 2010 | More science/astro | permalink to this entry ]
Sun, 06 Sep 2009
Someone was asking for help building XEphem on the XEphem mailing list.
It was a simple case of a missing include file, where the only trick
is to find out what package you need to install to get that file.
(This is complicated on Ubuntu, which the poster was using, by the way they fragment the X development headers into a maze of a zillion tiny packages.)
The solution -- apt-file -- is so simple and easy to use, and yet
a lot of people don't know about it. So here's how it works.
The poster reported getting these compiler errors:
ar rc libz.a adler32.o compress.o crc32.o uncompr.o deflate.o trees.o zutil.o inflate.o inftrees.o inffast.o
ranlib libz.a
make[1]: Leaving directory `/home/gregs/xephem-3.7.4/libz'
gcc -I../../libastro -I../../libip -I../../liblilxml -I../../libjpegd -I../../libpng -I../../libz -g -O2 -Wall -I../../libXm/linux86 -I/usr/X11R6/include -c -o aavso.o aavso.c
In file included from aavso.c:12:
../../libXm/linux86/Xm/Xm.h:56:27: error: X11/Intrinsic.h: No such file or directory
../../libXm/linux86/Xm/Xm.h:57:23: error: X11/Shell.h: No such file or directory
../../libXm/linux86/Xm/Xm.h:58:23: error: X11/Xatom.h: No such file or directory
../../libXm/linux86/Xm/Xm.h:59:34: error: X11/extensions/Print.h: No such file or directory
In file included from ../../libXm/linux86/Xm/Xm.h:60,
from aavso.c:12:
../../libXm/linux86/Xm/XmStrDefs.h:1373: error: expected `=', `,', `;', `asm' or `__attribute__' before `char'
In file included from ../../libXm/linux86/Xm/Xm.h:60,
from aavso.c:12:
../../libXm/linux86/Xm/XmStrDefs.h:5439:28: error: X11/StringDefs.h: No such file or directory
In file included from ../../libXm/linux86/Xm/Xm.h:61,
from aavso.c:12:
../../libXm/linux86/Xm/VirtKeys.h:108: error: expected `)' before `*' token
In file included from ../../libXm/linux86/Xm/Display.h:49,
from ../../libXm/linux86/Xm/DragC.h:48,
from ../../libXm/linux86/Xm/Transfer.h:44,
from ../../libXm/linux86/Xm/Xm.h:62,
from aavso.c:12:
../../libXm/linux86/Xm/DropSMgr.h:88: error: expected specifier-qualifier-list before `XEvent'
../../libXm/linux86/Xm/DropSMgr.h:100: error: expected specifier-qualifier-list before `XEvent'
How do you go about figuring this out?
When interpreting compiler errors, usually what matters is the
*first* error. So try to find that. In the transcript above, the first
line saying "error:" is this one:
../../libXm/linux86/Xm/Xm.h:56:27: error: X11/Intrinsic.h: No such file or directory
So the first problem is that the compiler is trying to find a file
called Intrinsic.h that isn't installed.
On Debian-based systems, there's a great program you can use to find
files available for install: apt-file. It's not installed by default,
so install it, then update it, like this (the update will take a long time):
$ sudo apt-get install apt-file
$ sudo apt-file update
Once it's updated, you can now find out what package would install a
file like this:
$ apt-file search Intrinsic.h
libxt-dev: /usr/include/X11/Intrinsic.h
tendra: /usr/lib/TenDRA/lib/include/x5/t.api/X11/Intrinsic.h
In this case, two packages could install a file by that name. You can usually figure out from looking which one is the "real" one (usually the one with the shorter name, or the one where the package name sounds related to what you're trying to do). If you're still not sure, try something like
apt-cache show libxt-dev tendra
to find out more about the packages involved.
In this case, it's pretty clear that tendra is a red herring,
and the problem is likely that the libxt-dev package is missing.
So apt-get install libxt-dev
and try the build again.
Repeat the process until you have everything you need for the build.
Remember apt-file if you're not already using it.
It's tremendously useful in tracking down build dependencies.
Tags: open source, linux, programming, debian, ubuntu
[ 11:25 Sep 06, 2009 | More linux | permalink to this entry ]
Fri, 12 Jun 2009
My last Toastmasters speech was on open formats: why you should use
open formats rather than closed/proprietary ones and the risks of
closed formats.
To make it clearer, I wanted to print out handouts people could take home
summarizing some of the most common closed formats, along with
open alternatives.
Surely there are lots of such tables on the web, I thought.
I'll just find one and customize it a little for this specific audience.
To my surprise, I couldn't find a single one. Even
openformats.org didn't
have very much.
So I started one:
Open vs. Closed Formats.
It's far from complete, so
I hope I'll continue to get contributions to flesh it out more.
And the talk? It went over very well, and people appreciated the
handout. There's a limit to how much information you can get across
in under ten minutes, but I think I got the point across.
The talk itself, such as it is, is here:
Open up!
Tags: tech, formats, open source, toastmasters
[ 11:37 Jun 12, 2009 | More tech | permalink to this entry ]
Sat, 03 Jan 2009
Latest obsession: mapping with
OpenStreetMap.
Last month, OpenStreetMap and its benefactor company
CloudMade
held a "mapping party" in Palo Alto. I love maps and mapping (I wrote
my own little topographic
map viewer when I couldn't find one ready-made) and I've been
wanting to know more about the state of open source mapping.
A mapping party sounded perfect.
The party was a loosely organized affair. We met at a coffeehouse
and discussed basics of mapping and openstreetmap. The hosts tried
to show us newbies how OSM works, but that was complicated by the
coffeehouse's wireless net being down. No big deal -- turns out the
point of a mapping party is to hand out GPSes to anyone who doesn't
already have one and send us out to do some mapping.
I attached myself to a couple of CloudMade folks who had some
experience already and we headed north on a pedestrian path. We spent
a couple of hours walking urban trails and marking waypoints.
Then we all converged on a tea shop (whose wireless worked a little
better than the one at the coffeehouse, but still not very reliably)
for lunch and transfer of track and waypoint files.
This part didn't work all that well. It turned out the units we were
using (Garmin Legend HCx) can transfer files in two modes, USB
mass storage (the easy way, just move files as if from an external
disk) or USB Garmin protocol (the hard way: you have to use software
like gpsbabel, or the Garmin software if you're on Windows).
And in mass storage mode, you get a file but the waypoints aren't there.
The folks running the event all had Macs, and there were several Linux
users there as well, but no Windows laptops. By the time the Macs both
had gpsbabel downloaded over the tea shop's flaky net, it was past
time for me to leave, so I never did get to see our waypoint files.
Still, I could see it was possible (and one of the Linux attendees
assured me that he had no trouble with any of the software; in fact,
he found it easier than what the Mac people at the party were going
through).
But I was still pretty jazzed about how easy OpenStreetMap is
to use. You can contribute to the maps even without a GPS.
Once you've registered on the site, you just click on the Edit tab
on any map, and you see a flash application called "Potlatch" that
lets you mark trails, roads or other features based on satellite
images or the existing map. I was able to change a couple of mismarked
roads near where I live, as well as adding a new trail and correcting
the info on an existing one for one of the nearby parks.
If you prefer (as, I admit, I do) to work offline or don't like flash,
you can use a Java app, JOSM, or a native app, merkaartor. Very cool!
Merkaartor is my favorite so far (because it's faster and works
better in standalone mode) though it's still fairly rough around
the edges. They're all described on the OSM
Map Editing
page.
Of course, all this left me lusting after a GPS. But that's another
story, to be told separately.
Tags: gps, GIS, mapping, open source
[ 13:00 Jan 03, 2009 | More mapping | permalink to this entry ]
Fri, 04 Jul 2008
Oops! Right after I posted that last entry, I discovered that my
little kitfox extension wasn't working as well as I'd thought.
And the more I hacked it, the less well it worked, and the more
I discovered was missing, like a chrome.manifest file (which
firefox 2 hadn't seemed to need).
Eventually some very helpful folks on #extdev pointed me to
Ted Mielczarek's excellent Extension
Wizard. Give it some details about your extension (its name and
version, your name, and a couple things you might want like a
toolbar button, a prefs panel and a context menu) and it generates
a zipped directory containing a bare bones extension, even including
niceties like internationalized strings.
Even better, your new extension skeleton includes a readme that
tells you how to leave the extension expanded while you work on
it. That's quite a bit easier than building the XPI file and installing
it each time.
So kitfox has a
0.3 version (in the unlikely event that anybody besides me wants it).
There's a project called
fizzypop
to develop and extend useful Mozilla dev tools like the Extension Wizard ...
watch that space for more details.
Tags: mozilla, firefox, open source
[ 21:12 Jul 04, 2008 | More tech/web | permalink to this entry ]